191 research outputs found

    Let Google index your media fragments

    No full text
    Current multimedia applications in Web 2.0 have generated a massive amount of multimedia resources, but most search results for multimedia resources still focus on the whole-resource level. Media fragments expose the inside content of multimedia resources for annotations, but they are not yet fully explored and indexed by major search engines. W3C has published Media Fragments URI 1.0 as a standard way to describe media fragments on the Web. In this proposal, we make use of Google's Ajax Application Crawler to index media fragments represented by Media Fragment URIs. Each media fragment with related annotations will have an individual snapshot page, which can be indexed by the crawler. Initial evaluation has shown that the snapshot pages are successfully fetched by Googlebot, and we expect more media fragments to be indexed using this method, so that the search for multimedia resources will be more efficient.
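    The abstract does not include the paper's code; the sketch below only illustrates the general idea under stated assumptions: a temporal Media Fragments URI (the "#t=start,end" form) is mapped to a hashbang URL for users and to the corresponding "_escaped_fragment_" URL that Google's (now retired) AJAX crawling scheme used to fetch a pre-rendered snapshot page. The base URLs and function names are illustrative, not Synote's.

```typescript
// Sketch: map a temporal Media Fragment URI onto a user-facing hashbang URL
// and the escaped-fragment URL a crawler would request for the snapshot page.

interface TemporalFragment {
  start: number; // seconds
  end?: number;  // seconds; undefined for an open-ended fragment
}

// Parse the "#t=start,end" form defined by Media Fragments URI 1.0.
function parseTemporalFragment(uri: string): TemporalFragment | null {
  const match = uri.match(/#t=([\d.]+)(?:,([\d.]+))?/);
  if (!match) return null;
  return {
    start: parseFloat(match[1]),
    end: match[2] !== undefined ? parseFloat(match[2]) : undefined,
  };
}

// Build the URL served to users and the URL a crawler fetches for the
// pre-rendered snapshot page carrying the fragment's annotations.
function snapshotUrls(baseUrl: string, fragment: TemporalFragment) {
  const value = fragment.end !== undefined
    ? `t=${fragment.start},${fragment.end}`
    : `t=${fragment.start}`;
  return {
    userUrl: `${baseUrl}#!${value}`,
    crawlerUrl: `${baseUrl}?_escaped_fragment_=${encodeURIComponent(value)}`,
  };
}

// Example: a 30-60 second fragment of a recording (URLs are placeholders).
const frag = parseTemporalFragment("http://example.org/video.mp4#t=30,60");
if (frag) {
  console.log(snapshotUrls("http://example.org/recording/42", frag));
}
```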

    HTML5 video on mobile browsers

    No full text
    This paper reports on research investigating the current ability of HTML5 to play video in mobile browsers. Smartphones and the Mobile Internet are rapidly becoming an important platform for access to information anytime and anywhere. HTML5, the new HTML standard, incorporates features such as video playback that have previously depended on third-party browser plug-ins, but no browser currently provides 100% support for HTML5. All the tests reported in this paper were carried out using smartphones with screen sizes from 3.0 to 4.8 inches, and the ability to replay videos in a range of formats, move directly to time points in the video and display closed captions was investigated. Key findings were that: video cannot be started programmatically, as only selecting (tapping) the screen can trigger playback; no visual element sitting over the <video> will receive click events while the video is visible (playing or paused); and, of the many HTML5 video players available, MediaElement.js was found to be the open source player currently satisfying the greatest number of requirements.
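    As a minimal illustration of the first finding (not code from the paper), playback on the tested mobile browsers has to be wired to a user gesture rather than started on page load; the element ids below are assumptions.

```typescript
// Sketch: start playback only inside a click handler, since calling
// video.play() outside a user gesture is ignored on these browsers.

const video = document.querySelector<HTMLVideoElement>("#lecture-video");
const playButton = document.querySelector<HTMLButtonElement>("#play-button");

playButton?.addEventListener("click", () => {
  if (!video) return;
  video.play().catch((err) => {
    // Modern browsers reject the play() promise when autoplay policies block it.
    console.warn("Playback could not be started:", err);
  });
});
```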

    Synote mobile HTML5 responsive design video annotation application

    No full text
    Synote Mobile has been developed as an accessible, cross-device and cross-browser HTML5 web-based collaborative replay and annotation tool to make web-based recordings easier to access, search, manage and exploit for learners, teachers and others. It has been developed as a new mobile HTML5 version of the award-winning, open source and freely available Synote, which has been used since 2008 by students throughout the world to learn interactively from recordings. While most UK students now carry mobile devices capable of replaying Internet video, the majority of these devices cannot replay Synote’s accessible, searchable, annotated recordings, as Synote was created in 2008 when few students had phones or tablets capable of replaying such videos.

    Enhancing Synote with quizzes, polls and analytics

    No full text
    Videos combined with quizzes and polls are used for educational purposes in many settings, including Massive Open Online Courses (MOOCs) and university lectures. For the authors of these resources, this often involves splitting videos into sections with a poll or quiz after each section to gauge understanding. This approach requires the author to have video editing skills and can be time-consuming. It would be useful if quizzes could be created and included directly in videos without the need for video editing. The resulting media could then be analysed to discover how best to present information for learning. This paper describes the development of three main tools to enhance Synote, the web-based video annotation system. The Quiz Authoring Tool allows users to specify sets of questions and polls to appear in the video, the time at which question sets should appear, and actions to be taken when questions are answered (e.g. skip back in the video if the answer is incorrect). The Questions Overlay library allows quizzes and polls to be overlaid on web videos in any of the main video formats, playable in the major browsers on different platforms. The Video and Quiz Analytics tool records and displays metrics of user behaviour.
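    The following sketch is not the Quiz Authoring Tool itself, but it shows the kind of behaviour the abstract describes: a question bound to a time point pauses the video when that point is reached, and an incorrect answer can skip the video back for review. The type and function names are assumptions.

```typescript
// Sketch: timed in-video questions with an optional skip-back on a wrong answer.

interface TimedQuestion {
  time: number;             // seconds into the video at which to pause
  prompt: string;
  options: string[];
  correctIndex: number;
  skipBackOnWrong?: number; // seconds to rewind if the answer is incorrect
}

function attachQuestions(
  video: HTMLVideoElement,
  questions: TimedQuestion[],
  ask: (q: TimedQuestion) => Promise<number>, // renders an overlay, resolves with the chosen option
) {
  const pending = [...questions].sort((a, b) => a.time - b.time);
  video.addEventListener("timeupdate", async () => {
    if (pending.length === 0 || video.currentTime < pending[0].time) return;
    const q = pending.shift()!;
    video.pause();
    const answer = await ask(q);
    if (answer !== q.correctIndex && q.skipBackOnWrong) {
      video.currentTime = Math.max(0, q.time - q.skipBackOnWrong);
    }
    video.play();
  });
}
```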

    Creating enriched YouTube media fragments with NERD using timed-text

    No full text
    This demo enables the automatic creation of semantically annotated YouTube media fragments. A video is first ingested into the Synote system, and a new method retrieves its associated subtitles or closed captions. Next, NERD is used to extract named entities from the transcripts, which are then temporally aligned with the video. The entities are disambiguated in the LOD cloud, and a user interface enables users to browse through the entities detected in a video or to get more information about them. We evaluated our application with 60 videos from 3 YouTube channels.
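    A small sketch of the alignment step, under assumed data shapes rather than NERD's actual API: an entity extracted from a subtitle cue inherits that cue's start and end times, which are then expressed as a temporal media fragment appended to the video URL (the use of the "#t=" Media Fragments syntax on the watch URL is an assumption about the representation).

```typescript
// Sketch: turn entities found in timed subtitle cues into temporal media fragments.

interface SubtitleCue { start: number; end: number; text: string }
interface NamedEntity { label: string; type: string; cueIndex: number }

function entityFragments(videoUrl: string, cues: SubtitleCue[], entities: NamedEntity[]) {
  return entities.map((e) => {
    const cue = cues[e.cueIndex];
    return {
      entity: e.label,
      type: e.type,
      // Temporal fragment covering the cue in which the entity was found.
      fragmentUri: `${videoUrl}#t=${cue.start},${cue.end}`,
    };
  });
}

// Example with one cue and one extracted entity (video id is a placeholder).
console.log(entityFragments(
  "https://www.youtube.com/watch?v=XXXX",
  [{ start: 12.5, end: 16.0, text: "Tim Berners-Lee invented the Web" }],
  [{ label: "Tim Berners-Lee", type: "Person", cueIndex: 0 }],
));
```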

    Synote: weaving media fragments and linked data

    No full text
    While end users can easily share and tag multimedia resources online, searching for and reusing the inside content of multimedia, such as a certain area within an image or a ten-minute segment within a one-hour video, is still difficult. Linked data is a promising way to interlink media fragments with other resources. Many applications in Web 2.0 have generated a large amount of external annotations linked to media fragments. In this paper, we use Synote as the target application to discuss how media fragments can be published together with external annotations following linked data principles. Our design addresses the dereferencing, description and interlinking problems in interlinking multimedia. We also implement a model to let Google index media fragments, which improves the online presence of media fragments. The evaluation shows that our design can successfully publish media fragments and annotations for both Semantic Web agents and traditional search engines. Publishing media fragments using the design we describe in this paper will lead to better indexing of multimedia resources and their consequent findability.
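    One plausible Linked Data shape for such an annotation, shown only as an illustration and not necessarily what Synote publishes, uses the W3C Web Annotation vocabulary with a temporal Media Fragments URI as the target; the identifiers are placeholders.

```typescript
// Sketch: a Web Annotation targeting a temporal media fragment, serialised as JSON-LD.

const annotation = {
  "@context": "http://www.w3.org/ns/anno.jsonld",
  "id": "http://example.org/synote/annotation/123",
  "type": "Annotation",
  "body": {
    "type": "TextualBody",
    "value": "Speaker introduces the main theorem here.",
    "format": "text/plain",
  },
  // The target is the inside content of the video: seconds 600-660,
  // identified directly by a Media Fragments URI.
  "target": "http://example.org/media/lecture42.mp4#t=600,660",
};

console.log(JSON.stringify(annotation, null, 2));
```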

    Flowfield prediction of airfoil off-design conditions based on a modified variational autoencoder

    Full text link
    Airfoil aerodynamic optimization based on single-point design may lead to poor off-design behavior. Multipoint optimization that considers the off-design flow conditions is usually applied to improve robustness and expand the flight envelope. Many deep learning models have been utilized for the rapid prediction or reconstruction of flowfields. However, the flowfield reconstruction accuracy may be insufficient for cruise efficiency optimization, and the generalization ability of such models is also questionable when they face airfoils different from those on which they were trained. Because a computational fluid dynamics evaluation of the cruise condition is usually necessary and affordable in industrial design, a novel deep learning framework is proposed that utilizes the cruise flowfield as a prior reference for the off-design condition prediction. A prior variational autoencoder is developed to extract features from the cruise flowfield and to generate new flowfields under other freestream conditions. Physics-based loss functions based on aerodynamic force and conservation of mass are derived to minimize the prediction error of the flowfield reconstruction. The results demonstrate that the proposed model can reduce the prediction error on test airfoils by 30% compared with traditional models. The physics-based loss function can further reduce the prediction error by 4%. The proposed model offers a better balance between the time cost and the fidelity requirements of evaluation for cruise and off-design conditions, which makes it more feasible for industrial applications.
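    The abstract does not give the exact loss formulation; a generic composite objective of the kind it describes (a variational term plus reconstruction error plus physics-based penalties on aerodynamic force and mass conservation, with assumed weights) might take a form such as the following, where the weights and notation are assumptions rather than the paper's own.

```latex
\mathcal{L} \;=\;
  \underbrace{\lVert \hat{w} - w \rVert_2^2}_{\text{flowfield reconstruction}}
  \;+\; \beta\, D_{\mathrm{KL}}\!\left( q_\phi(z \mid w) \,\Vert\, p(z) \right)
  \;+\; \lambda_f \,\lVert \hat{F} - F \rVert_2^2
  \;+\; \lambda_m \,\big\lVert \nabla \cdot (\rho\, \hat{\mathbf{u}}) \big\rVert_2^2
```

    Here $w$ and $\hat{w}$ denote the true and predicted off-design flowfields, $F$ and $\hat{F}$ the aerodynamic forces obtained by integrating the true and predicted surface pressures, and the last term penalizes the residual of steady mass conservation in the predicted velocity field.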

    Synote Discussion. Extending Synote to support threaded discussions synchronised with recorded videos

    No full text
    Synote Discussion has been developed as an accessible, cross-device and cross-browser HTML5 web-based collaborative replay, annotation and discussion extension of the award-winning open source Synote, which since 2008 has made web-based recordings easier to access, search, manage and exploit for learners, teachers and others. While Synote enables users to create comments in ‘Synmarks’ synchronised with any point in a recording, it does not allow users to comment on these Synmarks in a discussion thread. Synote Discussion supports commenting on Synmarks: the comments are stored as discussions in its own database and published as Linked Data so that they are available for Synote or other systems to use. This paper explains the requirements and design of Synote Discussion, presents the results of a usability study and summarises conclusions and planned future work.
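    A hypothetical sketch of the kind of data model this implies (field names are assumptions, not Synote Discussion's actual schema): discussion comments reference an existing Synmark, which is itself anchored to a time point in a recording, and replies are nested by parent id.

```typescript
// Sketch: threaded comments attached to a time-anchored Synmark.

interface Synmark {
  id: string;
  recordingId: string;
  start: number;       // seconds into the recording
  end?: number;
  text: string;
}

interface DiscussionComment {
  id: string;
  synmarkId: string;   // the Synmark this thread hangs off
  parentId?: string;   // undefined for top-level comments, set for replies
  author: string;
  body: string;
  created: string;     // ISO 8601 timestamp
}

// Group a flat list of comments into threads for one Synmark.
function threadsFor(synmarkId: string, comments: DiscussionComment[]) {
  const topLevel = comments.filter(
    (c) => c.synmarkId === synmarkId && c.parentId === undefined);
  return topLevel.map((root) => ({
    root,
    replies: comments.filter((c) => c.parentId === root.id),
  }));
}
```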

    Synote: development of a Web-based tool for synchronized annotations

    No full text
    This paper discusses the development of a Web-based media annotation application named Synote, which addresses the important issue that, while a whole multimedia resource on the Web can easily be bookmarked, searched, linked to and tagged, it is still difficult to search or associate notes or other resources with a certain part of that resource. Synote supports the creation of synchronized notes, bookmarks, tags, links, images and text captions. It is a freely available application that enables any user to make and search annotations on any fragment of a continuous multimedia resource in the most widely used browsers and operating systems. In the implementation, Synote categorized different media resources and synchronized them via a timeline. The presentation of synchronized resources makes full use of Web 2.0 AJAX technology to enrich the interoperability of the user experience. Positive evaluation results regarding the performance, efficiency and effectiveness of Synote were obtained when it was used by students and teachers on a number of undergraduate courses.
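    As a minimal sketch of the timeline synchronisation described above (the Annotation shape and DOM ids are illustrative assumptions, not Synote's implementation), each annotation carries a start and end time, and the annotation whose interval contains the current playback time is highlighted as the recording plays.

```typescript
// Sketch: highlight the annotation whose time interval contains the playhead.

interface Annotation { id: string; start: number; end: number; html: string }

function syncAnnotations(video: HTMLVideoElement, annotations: Annotation[]) {
  video.addEventListener("timeupdate", () => {
    const t = video.currentTime;
    for (const a of annotations) {
      const el = document.getElementById(`annotation-${a.id}`);
      el?.classList.toggle("current", t >= a.start && t < a.end);
    }
  });
}
```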